@OmerFarukOruc
OmerFarukOruc / claude.md
Last active May 14, 2026 02:14
AI Agent Workflow Orchestration Guidelines

AI Coding Agent Guidelines (claude.md)

These rules define how an AI coding agent should plan, execute, verify, communicate, and recover when working in a real codebase. Optimize for correctness, minimalism, and developer experience.


Operating Principles (Non-Negotiable)

  • Correctness over cleverness: Prefer boring, readable solutions that are easy to maintain.
  • Smallest change that works: Minimize blast radius; don't refactor adjacent code unless it meaningfully reduces risk or complexity.

@yasku
yasku / llm-wiki.md
Created May 14, 2026 02:14 — forked from karpathy/llm-wiki.md
llm-wiki

LLM Wiki

A pattern for building personal knowledge bases using LLMs.

This is an idea file: it is designed to be copy-pasted into your own LLM agent (e.g. OpenAI Codex, Claude Code, OpenCode / Pi, etc.). Its goal is to communicate the high-level idea; your agent will build out the specifics in collaboration with you.

The core idea

Most people's experience with LLMs and documents looks like RAG: you upload a collection of files, the LLM retrieves relevant chunks at query time, and generates an answer. This works, but the LLM is rediscovering knowledge from scratch on every question. There's no accumulation. Ask a subtle question that requires synthesizing five documents, and the LLM has to find and piece together the relevant fragments every time. Nothing is built up. NotebookLM, ChatGPT file uploads, and most RAG systems work this way.
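
To make the contrast concrete, here is a minimal sketch of that retrieve-then-generate loop in plain JavaScript. The word-overlap scorer and the generate() stub are illustrative stand-ins for a real embedding index and a real LLM call; none of these names come from the original gist.

// Toy corpus standing in for a folder of uploaded files.
const docs = [
  'Invoices are due within 30 days of receipt.',
  'The warranty covers parts but not labor.',
  'Returns require the original receipt and invoice.',
];

// Stand-in retrieval: rank documents by word overlap with the question.
// A real system would use embeddings and a vector index instead.
function topK(question, k) {
  const words = new Set(question.toLowerCase().split(/\W+/).filter(Boolean));
  return docs
    .map(doc => ({
      doc,
      score: doc.toLowerCase().split(/\W+/).filter(w => words.has(w)).length,
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map(entry => entry.doc);
}

// Stand-in for the actual LLM call.
const generate = prompt => 'LLM answer given:\n' + prompt;

// The RAG loop: every question rebuilds its context from scratch,
// so nothing synthesized for one answer carries over to the next.
function answer(question) {
  const context = topK(question, 2).join('\n');
  return generate('Context:\n' + context + '\n\nQuestion: ' + question);
}

console.log(answer('When are invoices due?'));

The wiki pattern the gist is pitching goes the other way: accumulate syntheses into persistent pages so later questions can build on them rather than re-derive them from raw chunks.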

@acidgreenservers
acidgreenservers / AGENTS.md
Last active May 14, 2026 02:13
System Prompt For Coding Agents.

CODEBASE REASONING TOPOLOGY (Short)

You are a thinking partner for experienced developers. Your role is to help them think clearer, design better systems, and ship coherent code — not to teach or act as a blind code generator.

Core Truth: Structure is persistence. Prioritize tight topology over perfect context.


ENTRY PROTOCOL: Ambiguity Detection

@andrewlimaza
andrewlimaza / example.html
Created December 19, 2016 11:31
Print certain div / elements using window.print()
<script>
// Temporarily swap the page body for the target element's markup,
// print, then restore the original body.
// Caveat: rewriting document.body.innerHTML drops event listeners that
// were bound inside the body, so this suits simple pages.
function printDiv(divName) {
  var printContents = document.getElementById(divName).innerHTML;
  var originalContents = document.body.innerHTML;
  document.body.innerHTML = printContents;
  window.print();
  document.body.innerHTML = originalContents;
}
</script>
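
A small usage sketch (the myDiv id, its contents, and the button are illustrative assumptions, not part of the original gist):

<!-- Only the contents of #myDiv are sent to the printer. -->
<div id="myDiv">Report contents to print.</div>
<button onclick="printDiv('myDiv')">Print this section</button>
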
@evilJazz
evilJazz / MI50_32GB_VBIOS.md
Last active May 14, 2026 02:07
MI50 32GB VBIOS